73 research outputs found

    The Organ Transcriptions of Jean Guillou

    This paper analyzes five organ transcriptions by Jean Guillou. The excerpts to be analyzed are: the six-voice ricercar from Bach's Musical Offering, BWV 1079; Mozart's Adagio and Fugue in C minor, K. 546; Liszt's Orpheus; Mussorgsky's "La grande porte de Kiev" from Pictures at an Exhibition; and Prokofiev's Toccata, Op. 11. The analysis compares each transcription with its original and discusses how Guillou has altered the piece, including any musical elements that are changed, added, or subtracted, such as notes, rhythms, dynamics, registration, or articulation. The document also explains how these changes facilitate the performance of the works on the organ. In summary, we will see that Guillou's transcriptions of the Bach and Mozart works follow the originals rather strictly. The Prokofiev transcription is slightly more adventurous, while in the Liszt and Mussorgsky transcriptions Guillou develops something that approaches a new piece, with enhanced virtuosity through running notes, complex harmonies, and creative registrations. The document demonstrates that this flexibility of approach to transcription is a hallmark of Guillou's style.

    Analysing system behaviour by automatic benchmarking of system-level provenance

    Provenance is a term originating in the art world, where it refers to the chain of information about a work of art from its creation to its current state. It records all the historical information relating to the piece, including storage locations, ownership, purchase prices, and so on. The term has a very similar meaning in computer science and data processing, where it serves as the lineage of data, supporting either reproducibility or the tracing of runtime activities for various purposes. Like artistic provenance, provenance in computing describes how a piece of data was created, passed around, modified, and brought to its current state; it also identifies who is responsible for particular activities, acting as metadata on components in a computing environment. Because the point of provenance is to record all information related to some data, its size is generally proportional to the amount of data processing that took place; it tends to be a large set of data and is hard to analyse. Moreover, not all collected information is useful for every purpose. For example, if we only want to trace all previous owners of a file, the storage-location information can be ignored. To capture useful information without handling a large amount of data, researchers and developers have built provenance recording tools that record only the information needed by particular applications, using different means and mechanisms throughout the system. This yields a lighter set of information to analyse, but it also results in non-standard provenance, and general users may not have a clear view of which tools are better for a given purpose.
    For example, if we want to determine for a security analysis whether certain action sequences were performed in a process, and who is accountable for those actions, we have no way of knowing which tools can be trusted to provide the correct information. It is also hard to compare the tools, as there is little common standard. With this need in mind, this thesis concentrates on providing an automated system, ProvMark, to benchmark the tools. ProvMark exposes the strengths and weaknesses of their provenance results in different scenarios; it allows tool developers to verify their tools, and it lets end users compare tools on an equal footing to choose a suitable one for their purpose. Overall, benchmarking the expressiveness of the tools across scenarios points to the right choice of provenance tool for a specific use.
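The idea of provenance as a chain of records about a data item can be pictured as a minimal event log. The sketch below is illustrative only: the field names, event names, and the owner-tracing query are assumptions for exposition, not the format used by ProvMark or any particular capture tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvEvent:
    # One link in the provenance chain: what happened, who did it, where.
    action: str      # e.g. "create", "modify", "transfer"
    agent: str       # who is responsible for the action
    location: str    # where the data resided at the time

@dataclass
class ProvRecord:
    # Full provenance of one data item: its history from creation onward.
    item: str
    events: List[ProvEvent] = field(default_factory=list)

    def record(self, action: str, agent: str, location: str) -> None:
        self.events.append(ProvEvent(action, agent, location))

    def owners(self) -> List[str]:
        # Trace every agent that acted on the item, in first-seen order,
        # ignoring storage locations (the "previous owners" query above).
        seen: List[str] = []
        for ev in self.events:
            if ev.agent not in seen:
                seen.append(ev.agent)
        return seen

prov = ProvRecord("report.txt")
prov.record("create", "alice", "/home/alice")
prov.record("modify", "alice", "/home/alice")
prov.record("transfer", "bob", "/srv/share")
print(prov.owners())  # ['alice', 'bob']
```

Note how the owner query simply projects away the location field: this is the sense in which tools that record less can still answer some questions completely, which is exactly what expressiveness benchmarking has to measure.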

    Short-term effects of a gain-focused reappraisal intervention for dementia caregivers: A double-blind cluster-randomized controlled trial

    Objectives: To examine the effects of a benefit-finding (BF) intervention whose key feature is the use of gain-focused reappraisal strategies to find positive meanings and benefits in caring for someone with dementia. Design: Cluster-randomized double-blind controlled trial. Setting: Social centers and clinics. Participants: 129 caregivers. Inclusion criteria were (a) primary caregiver aged 18+ and without cognitive impairment, (b) providing ≥14 hours of care per week to a relative with mild-to-moderate Alzheimer's disease, and (c) scoring ≥3 on the Hamilton Depression Rating Scale. The exclusion criterion was a care recipient with parkinsonism or another form of dementia. Interventions: The BF intervention was evaluated against two treatment-as-usual conditions, namely simplified psychoeducation (SIM-PE; lectures only) and standard psychoeducation (STD-PE). Each intervention lasted eight weeks, with one 2-hour session per week. Randomization into these conditions was based on center/clinic membership. Measurements: The primary outcome was depressive symptoms. Secondary outcomes were the Zarit Burden Interview, role overload, and psychological well-being. Self-efficacy beliefs and positive gains were treated as mediators. Measures were collected at baseline and posttreatment. Results: Regression analyses showed BF treatment effects on all outcomes when compared with SIM-PE, and effects on depressive symptoms and Zarit burden when compared with STD-PE. Effect sizes were medium-to-large for depressive symptoms (d = -0.77 to -0.96) and medium for the secondary outcomes (|d| = 0.42-0.65). Furthermore, using the bootstrapping method, we found significant mediating effects of self-efficacy in controlling upsetting thoughts and of positive gains, with the former being the primary mediator. Conclusions: Finding positive gains reduces depressive symptoms and burden and promotes psychological well-being, primarily by enhancing self-efficacy in controlling upsetting thoughts.
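The effect sizes reported above are Cohen's d values, the standardized mean difference between two groups. As a reminder of what that metric computes, here is a minimal stdlib-only sketch; the depression scores are made-up illustrative numbers, not the trial's data.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    # Standardized mean difference using the pooled sample standard deviation.
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical post-treatment depression scores (lower is better).
benefit_finding = [7, 8, 9, 6, 10, 8]
psychoeducation = [9, 8, 10, 7, 11, 9]
d = cohens_d(benefit_finding, psychoeducation)
print(round(d, 2))  # -0.71
```

A negative d here means the first group scored lower (fewer depressive symptoms); by the usual conventions |d| around 0.5 is a medium effect and around 0.8 a large one, which situates the trial's reported range of -0.77 to -0.96.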

    Nonparametric Likelihood Ratio Test for Univariate Shape-constrained Densities

    We provide a comprehensive study of a nonparametric likelihood ratio test of whether a random sample follows a distribution in a prespecified class of shape-constrained densities. While the conventional likelihood ratio is not well-defined for general nonparametric problems, we consider a working sub-class of alternative densities that leads to test statistics with desirable properties. Under the null, a scaled and centered version of the test statistic is asymptotically normal and distribution-free, which follows from the fact that the asymptotically dominant term under the null depends only on a function of the spacings of transformed observations that are uniformly distributed. The nonparametric maximum likelihood estimator (NPMLE) under the hypothesized class appears only in an average log-density ratio, which often converges to zero at a faster rate than the asymptotically normal term under the null, while it diverges under general alternatives, so that the test is consistent. The main technical work is to establish these results for the log-density ratio, which requires a case-by-case analysis, including new results for k-monotone densities with unbounded support and for completely monotone densities that are of independent interest. A bootstrap method that simulates from the NPMLE is shown to have the same limiting distribution as the test statistic.
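Schematically, a likelihood ratio test of this kind compares the best log-likelihood over a working class of alternatives with the log-likelihood of the NPMLE fitted under the shape constraint. The notation below is generic, chosen only to mirror the description in the abstract; it is not necessarily the paper's own.

```latex
% Schematic shape-constrained likelihood ratio statistic:
% \hat f_n is the NPMLE over the hypothesized class \mathcal{F}_0,
% \mathcal{G} is the working sub-class of alternative densities.
T_n \;=\; \sup_{g \in \mathcal{G}} \sum_{i=1}^{n} \log g(X_i)
       \;-\; \sum_{i=1}^{n} \log \hat f_n(X_i),
\qquad
\frac{T_n - a_n}{b_n} \;\xrightarrow{d}\; N(0,1) \quad \text{under } H_0 .

% The distribution-free limit arises because the dominant term is a
% function of spacings of transformed order statistics,
% U_{(1)} \le \dots \le U_{(n)}, \quad U_i = F_0(X_i) \sim \mathrm{Unif}(0,1),
% \quad D_i = U_{(i)} - U_{(i-1)} .
```

Under an alternative outside the hypothesized class, the average log-density ratio term no longer vanishes, which is what drives the divergence of T_n and hence the consistency claimed in the abstract.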

    Relationship between cortical thickness and neuropsychological performance in normal older adults and those with mild cognitive impairment

    Mild cognitive impairment (MCI) has been extensively investigated in recent decades in an effort to identify groups at high risk of dementia and to establish effective prevention methods during this period. Neuropsychological performance and cortical thickness are two important biomarkers used to predict progression from MCI to dementia. This study compares cortical thickness and neuropsychological performance in people with MCI and in cognitively healthy older adults, and further focuses on the relationship between cortical thickness and neuropsychological performance in these two groups. Forty-nine participants with MCI and 40 cognitively healthy older adults were recruited. Cortical thickness was analysed with the semiautomatic software FreeSurfer. The analysis reveals that the cortex in the left caudal anterior cingulate (p = 0.041), lateral occipital (p = 0.009), and right superior temporal (p = 0.047) areas was significantly thinner in the MCI group after adjustment for age and education. Almost all neuropsychological test results (with the exception of the forward digit span) were significantly correlated with cortical thickness in the MCI group after adjustment for age, gender, and education. In contrast, only the Category Verbal Fluency Test score and the forward digit span showed significant inverse correlations with cortical thickness in the control group of cognitively healthy older adults. The results suggest that cortical thinning in the temporal region reflects the global change in cognition in subjects with MCI and may be useful for predicting progression from MCI to Alzheimer's disease. The different pattern of correlation between cortical thickness and neuropsychological performance in patients with MCI versus healthy control subjects may be explained by the hypothesis of MCI as a disconnection syndrome.
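The correlations in this study are reported after adjustment for covariates such as age and education. With a single covariate, this kind of adjustment corresponds to a first-order partial correlation, which can be computed from pairwise Pearson correlations. The sketch below is a stdlib-only illustration with toy numbers, not the study's data or its exact statistical pipeline.

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    # Plain Pearson correlation coefficient.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def partial_r(x, y, z):
    # Correlation of x and y after removing the linear effect of z from both.
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Toy data for five subjects: cortical thickness (mm), a cognitive test
# score, and age (the covariate being controlled for).
thickness = [2.4, 2.6, 2.3, 2.7, 2.5]
score     = [24, 27, 22, 28, 25]
age       = [78, 70, 80, 68, 74]
print(round(partial_r(thickness, score, age), 2))
```

In this toy example the raw thickness-score correlation is very high, but much of it is shared with age; the partial correlation is what remains after that shared linear component is removed, which is the quantity the study's adjusted analyses estimate.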

    ProvMark: A Provenance Expressiveness Benchmarking System

    System-level provenance is of widespread interest for applications such as security enforcement and information protection. However, testing the correctness or completeness of provenance capture tools is challenging and currently done manually. In some cases there is not even a clear consensus about what behavior is correct. We present an automated tool, ProvMark, that uses an existing provenance system as a black box and reliably identifies the provenance graph structure recorded for a given activity, by a reduction to subgraph isomorphism problems handled by an external solver. ProvMark is a first step in the much-needed area of testing and comparing the expressiveness of provenance systems. We demonstrate ProvMark's usefulness in comparing three capture systems with different architectures and distinct design philosophies. Comment: To appear, Middleware 201
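The reduction at ProvMark's core asks whether a small query graph (the structure expected for an activity) embeds in a larger recorded provenance graph: the subgraph isomorphism problem. The abstract says ProvMark delegates this to an external solver; the sketch below is instead a naive brute force over injective vertex mappings, feasible only for tiny graphs, and the edge labels are invented for illustration.

```python
from itertools import permutations

def subgraph_isomorphic(small_edges, big_edges):
    """Return True if the small directed graph embeds in the big one,
    i.e. some injective vertex mapping sends every small edge onto a
    big edge. Brute force: only practical for very small graphs."""
    small_nodes = sorted({v for e in small_edges for v in e})
    big_nodes = sorted({v for e in big_edges for v in e})
    big = set(big_edges)
    for image in permutations(big_nodes, len(small_nodes)):
        m = dict(zip(small_nodes, image))
        if all((m[u], m[v]) in big for u, v in small_edges):
            return True
    return False

# A tiny one-edge query pattern against a recorded three-edge graph.
query = [("proc", "child")]
recorded = [("p1", "p2"), ("p2", "f1"), ("p2", "p3")]
print(subgraph_isomorphic(query, recorded))                      # True
print(subgraph_isomorphic([("a", "b"), ("b", "a")], recorded))   # False
```

Subgraph isomorphism is NP-complete in general, which is precisely why a benchmarking tool would hand the matching step to a dedicated external solver rather than enumerate mappings as above.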